Download Oracle Big Data 2016 Implementation Essentials.1z0-449.BrainDumps.2018-11-29.42q.vcex

Vendor: Oracle
Exam Code: 1z0-449
Exam Name: Oracle Big Data 2016 Implementation Essentials
Date: Nov 29, 2018
File Size: 499 KB

How to open VCEX files?

Files with VCEX extension can be opened by ProfExam Simulator.


Demo Questions

Question 1
The hdfs_stream script is used by the Oracle SQL Connector for HDFS to perform a specific task to access data. 
What is the purpose of this script? 
  A. It is the preprocessor script for the Impala table.
  B. It is the preprocessor script for the HDFS external table.
  C. It is the streaming script that creates a database directory.
  D. It is the preprocessor script for the Oracle partitioned table.
  E. It defines the jar file that points to the directory where Hive is installed.
Correct answer: B
Explanation:
The hdfs_stream script is the preprocessor for the Oracle Database external table created by Oracle SQL Connector for HDFS. 
References: https://docs.oracle.com/cd/E37231_01/doc.20/e36961/start.htm#BDCUG107
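For illustration, the DDL that Oracle SQL Connector for HDFS generates names hdfs_stream as the preprocessor of the external table. The sketch below is indicative only: the table name, columns, directory object, and location file are hypothetical, and in practice the connector's ExternalTable tool generates this DDL for you.
CREATE TABLE order_lines_ext (
  cust_num  VARCHAR2(10),
  item      VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY order_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'  -- streams the HDFS file content to the access driver
    FIELDS TERMINATED BY ','
  )
  LOCATION ('osch-location-file-1')
)
REJECT LIMIT UNLIMITED;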
Question 2
How should you encrypt the Hadoop data that sits on disk?
  A. Enable Transparent Data Encryption by using the Mammoth utility.
  B. Enable HDFS Transparent Encryption by using bdacli on a Kerberos-secured cluster.
  C. Enable HDFS Transparent Encryption on a non-Kerberos secured cluster.
  D. Enable Audit Vault and Database Firewall for Hadoop by using the Mammoth utility.
Correct answer: B
Explanation:
HDFS Transparent Encryption protects Hadoop data that’s at rest on disk. When the encryption is enabled for a cluster, data write and read operations on encrypted zones (HDFS directories) on the disk are automatically encrypted and decrypted. This process is “transparent” because it’s invisible to the application working with the data. 
The cluster where you want to use HDFS Transparent Encryption must have Kerberos enabled. 
Incorrect Answers:
D: The cluster where you want to use HDFS Transparent Encryption must have Kerberos enabled.
References: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/using-hdfs-transparent-encryption.html#GUID-16649C5A-2C88-4E75-809A-BBF8DE250EA3
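As a rough sketch only (assuming a Kerberos-enabled BDA cluster; directory and key names are illustrative, and the exact bdacli parameter should be confirmed against your BDA release), enabling the feature and creating an encryption zone might look like this:
# Kerberos must already be enabled on the cluster before this step
# (for example via: bdacli enable kerberos)
bdacli enable hdfs_transparent_encryption

# Create a key and an encryption zone; the directory and key name are hypothetical
hadoop key create demo_key
hdfs dfs -mkdir /user/demo/secure
hdfs crypto -createZone -keyName demo_key -path /user/demo/secure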
Question 3
Which two things does Big Data SQL push down to the storage cell on the Big Data Appliance? (Choose two.)
  A. Transparent Data Encrypted data
  B. the column selection of data from individual Hadoop nodes
  C. WHERE clause evaluations
  D. PL/SQL evaluation
  E. Business Intelligence queries from connected Exalytics servers
Correct answer: AB
Question 4
Your customer has an older starter rack Big Data Appliance (BDA) that was purchased in 2013. The customer would like to know what the options are for growing the storage footprint of its server. Which two options are valid for expanding the customer’s BDA footprint? (Choose two.)
  A. Elastically expand the footprint by adding additional high capacity nodes.
  B. Elastically expand the footprint by adding additional Big Data Oracle Database Servers.
  C. Elastically expand the footprint by adding additional Big Data Storage Servers.
  D. Racks manufactured before 2014 are no longer eligible for expansion.
  E. Upgrade to a full 18-node Big Data Appliance.
Correct answer: DE
Question 5
(Exhibit not reproduced in this demo: the query that defines the external table over the HDFS order data.)
What are three correct results of executing the preceding query? (Choose three.)
  A. Values longer than 100 characters for the DESCRIPTION column are truncated.
  B. ORDER_LINE_ITEM_COUNT in the HDFS file matches ITEM_CNT in the external table.
  C. ITEM_CNT in the HDFS file matches ORDER_LINE_ITEM_COUNT in the external table.
  D. Errors in the data for CUST_NUM or ORDER_NUM set the value to INVALID_NUM.
  E. Errors in the data for CUST_NUM or ORDER_NUM set the value to 0000000000.
  F. Values longer than 100 characters for any column are truncated.
Correct answer: ACD
Explanation:
com.oracle.bigdata.overflow: Truncates string data. Values longer than 100 characters for the DESCRIPTION column are truncated.
com.oracle.bigdata.erroropt: Replaces bad data. Errors in the data for CUST_NUM or ORDER_NUM set the value to INVALID_NUM.
References: https://docs.oracle.com/cd/E55905_01/doc.40/e55814/bigsql.htm#BIGUG76679
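For context, the access parameters that the explanation refers to would appear in the external table DDL roughly as follows. This is a sketch modeled on the referenced documentation; the exact column list, values, and HDFS path are illustrative.
CREATE TABLE order_summary (
  cust_num     VARCHAR2(10),
  order_num    VARCHAR2(20),
  item_cnt     NUMBER,
  description  VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HDFS
  ACCESS PARAMETERS (
    -- map the external-table column ITEM_CNT to the HDFS field ORDER_LINE_ITEM_COUNT
    com.oracle.bigdata.colmap:   {"col":"ITEM_CNT", "field":"order_line_item_count"}
    -- truncate DESCRIPTION values longer than 100 characters
    com.oracle.bigdata.overflow: {"action":"truncate", "col":"DESCRIPTION"}
    -- replace bad CUST_NUM / ORDER_NUM values with INVALID_NUM
    com.oracle.bigdata.erroropt: {"action":"replace", "value":"INVALID_NUM", "col":["CUST_NUM","ORDER_NUM"]}
  )
  LOCATION ('hdfs:/user/oracle/order_summary/*')
);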
Question 6
What does the following line do in Apache Pig? 
products = LOAD '/user/oracle/products' AS (prod_id, item);
  A. The products table is loaded by using data pump with prod_id and item.
  B. The LOAD table is populated with prod_id and item.
  C. The contents of /user/oracle/products are loaded as tuples and aliased to products.
  D. The contents of /user/oracle/products are dumped to the screen.
Correct answer: C
Explanation:
The LOAD function loads data from the file system. 
Syntax: LOAD 'data' [USING function] [AS schema];
Terms: 'data'
The name of the file or directory, in single quotes.
References: https://pig.apache.org/docs/r0.11.1/basic.html#load
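A minimal Pig session built around that statement might look like the sketch below; the PigStorage delimiter and the DUMP are illustrative additions, not part of the question.
-- load comma-delimited records as tuples and alias the relation as products
products = LOAD '/user/oracle/products' USING PigStorage(',') AS (prod_id, item);
-- inspect the tuples; nothing is actually read until an action such as DUMP or STORE runs
DUMP products;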
Question 7
What is the output of the following six commands when they are executed by using the Oracle XML Extensions for Hive in the Oracle XQuery for Hadoop Connector? 
1. $ echo "xxx" > src.txt 
2. $ hive --auxpath $OXH_HOME/hive/lib -i $OXH_HOME/hive/init.sql 
3. hive> CREATE TABLE src (dummy STRING); 
4. hive> LOAD DATA LOCAL INPATH 'src.txt' OVERWRITE INTO TABLE src; 
5. hive> SELECT * FROM src; 
         OK 
        xxx 
6. hive> SELECT xml_query ("x/y", "<x><y>123</y><z>456</z></x>") FROM src;
  A. xyz
  B. 123
  C. 456
  D. xxx
  E. x/y
Correct answer: B
Explanation:
Using the Hive Extensions 
To enable the Oracle XQuery for Hadoop extensions, use the --auxpath and -i arguments when starting Hive:
$ hive --auxpath $OXH_HOME/hive/lib -i $OXH_HOME/hive/init.sql 
The first time you use the extensions, verify that they are accessible. The following procedure creates a table named  SRC, loads one row into it, and calls the xml_query function. 
To verify that the extensions are accessible: 
1. Log in to an Oracle Big Data Appliance server where you plan to work. 
2. Create a text file named src.txt that contains one line:
$ echo "XXX" > src.txt 
3. Start the Hive command-line interface (CLI):
$ hive --auxpath $OXH_HOME/hive/lib -i $OXH_HOME/hive/init.sql 
The init.sql file contains the CREATE TEMPORARY FUNCTION statements that declare the XML functions. 
4. Create a simple table:
hive> CREATE TABLE src(dummy STRING); 
The SRC table is needed only to fulfill a SELECT syntax requirement. It is like the DUAL table in Oracle Database, which is referenced in SELECT statements to test SQL functions. 
5. Load data from src.txt into the table:
hive> LOAD DATA LOCAL INPATH 'src.txt' OVERWRITE INTO TABLE src; 
6. Query the table using Hive SELECT statements:
hive> SELECT * FROM src; 
OK 
xxx 
7. Call an Oracle XQuery for Hadoop function for Hive. This example calls the xml_query function to parse an XML string:
hive> SELECT xml_query("x/y", "<x><y>123</y><z>456</z></x>") FROM src; 
     . 
     . 
     . 
["123"] 
If the extensions are accessible, then the query returns ["123"], as shown in the example.
References: https://docs.oracle.com/cd/E53356_01/doc.30/e53067/oxh_hive.htm#BDCUG693
Question 8
The NoSQL KVStore experiences a node failure. One of the replicas is promoted to primary. 
How will the NoSQL client that accesses the store know that there has been a change in the architecture?
  A. The KVLite utility updates the NoSQL client with the status of the master and replica.
  B. KVStoreConfig sends the status of the master and replica to the NoSQL client.
  C. The NoSQL admin agent updates the NoSQL client with the status of the master and replica.
  D. The Shard State Table (SST) contains information about each shard and the master and replica status for the shard.
Correct answer: D
Explanation:
Given a shard, the Client Driver next consults the Shard State Table (SST). For each shard, the SST contains information about each replication node comprising the group (step 5). Based upon information in the SST, such as the identity of the master and the load on the various nodes in a shard, the Client Driver selects the node to which to send the request and forwards the request to the appropriate node. In this case, since we are issuing a write operation, the request must go to the master node. 
Note: If the machine hosting the master should fail in any way, then the master automatically fails over to one of the other nodes in the shard. That is, one of the replica nodes is automatically promoted to master. 
References: http://www.oracle.com/technetwork/products/nosqldb/learnmore/nosql-wp-1436762.pdf
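From the application side this is transparent: the client opens the store once and the driver re-routes requests as the SST changes. A minimal sketch with the Oracle NoSQL Database Java API follows; the store name, helper hosts, and keys are hypothetical.
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class SstDemo {
    public static void main(String[] args) {
        // Helper hosts only bootstrap the connection; afterwards the client
        // driver maintains its own view of the topology (the Shard State Table).
        KVStoreConfig config = new KVStoreConfig("kvstore",
                "node01:5000", "node02:5000", "node03:5000");
        KVStore store = KVStoreFactory.getStore(config);

        // Writes are routed to the current master of the owning shard; if a
        // replica has been promoted, the SST already reflects that change.
        store.put(Key.createKey("customer", "42"),
                  Value.createValue("Jane Doe".getBytes()));

        // Reads can be serviced by the master or a replica, chosen by the driver.
        ValueVersion vv = store.get(Key.createKey("customer", "42"));
        System.out.println(new String(vv.getValue().getValue()));

        store.close();
    }
}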
Question 9
Your customer is experiencing significant degradation in the performance of Hive queries. The customer wants to continue using SQL as the main query language for the HDFS store.
Which option can the customer use to improve performance? 
  A. native MapReduce Java programs
  B. Impala
  C. HiveFastQL
  D. Apache Grunt
Correct answer: B
Explanation:
Cloudera Impala is Cloudera's open source massively parallel processing (MPP) SQL query engine for data stored in a computer cluster running Apache Hadoop. 
Impala brings scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation. 
References: https://en.wikipedia.org/wiki/Cloudera_Impala
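Because Impala works against the same Hive metastore and a largely compatible SQL dialect, existing Hive tables can usually be queried as-is from impala-shell. A brief sketch (the host and table name are hypothetical):
$ impala-shell -i impalad-host
> INVALIDATE METADATA web_logs;   -- refresh Impala's view of tables created through Hive
> SELECT status_code, COUNT(*) FROM web_logs GROUP BY status_code;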
Question 10
Your customer’s Oracle NoSQL store has a replication factor of 3. One of the customer’s replica nodes goes down. 
What will be the long-term performance impact on the customer’s NoSQL database if the node is replaced?
  A. There will be no performance impact.
  B. The database read performance will be impacted.
  C. The database read and write performance will be impacted.
  D. The database will be unavailable for reading or writing.
  E. The database write performance will be impacted.
Correct answer: C
Explanation:
The number of nodes belonging to a shard is called its Replication Factor. The larger a shard's Replication Factor, the faster its read throughput (because there are more machines to service the read requests) but the slower its write performance (because there are more machines to which writes must be copied). 
Note: Replication Nodes are organized into shards. A shard contains a single Replication Node which is responsible for performing database writes, and which copies those writes to the other Replication Nodes in the shard. This is called the master node. All other Replication Nodes in the shard are used to service read-only operations. 
References: https://docs.oracle.com/cd/E26161_02/html/GettingStartedGuide/introduction.html#replicationfactor